
Pause AI


Protesters Are Fighting to Stop AI, but They're Split on How to Do It

WIRED

On a side street outside the headquarters of the Department of Science, Innovation and Technology in the center of London on Monday, 20 or so protesters are getting their chants in order. "When do we want it?" These protesters are part of PauseAI, a group of activists petitioning for companies to pause development of large AI models, which they fear could pose a risk to the future of humanity. Other PauseAI protests are taking place across the globe, in San Francisco, New York, Berlin, Rome, Ottawa, and a handful of other cities. Their aim is to grab the attention of voters and politicians ahead of the AI Seoul Summit, a follow-up to the AI Safety Summit held in the UK in November 2023. But the loosely organized group of protesters is still figuring out the best way to communicate its message. "The Summit didn't actually lead to meaningful regulations," says Joep Meindertsma, the founder of PauseAI. The attendees at the conference agreed to the "Bletchley Declaration," but that agreement doesn't mean much, Meindertsma says. "It's only a small first step, and what we need are binding international treaties." The group's main demand is for a pause on the training of AI systems more powerful than GPT-4. It is calling for all countries to implement this measure, but it specifically calls out the United States as the home of most leading AI labs. The group also wants all UN member states to sign a treaty that sets up an international AI safety agency responsible for approving new deployments of AI systems and training runs of large models. The protests are taking place on the same day that OpenAI announced a new version of ChatGPT designed to make the chatbot act more like a human. "We have banned technology internationally before," says Meindertsma, pointing to the Montreal Protocol, a global agreement finalized in 1987 that phased out CFCs and other chemicals known to deplete the ozone layer. "We've got treaties that ban blinding laser weapons."


What's changed since the "pause AI" letter six months ago?

MIT Technology Review

Six months ago, the Future of Life Institute (FLI) published an open letter calling for a six-month pause on training AI systems more powerful than GPT-4. Well, that didn't happen, obviously. I sat down with MIT professor Max Tegmark, the founder and president of FLI, to take stock of what has happened since. Here are highlights of our conversation. On shifting the Overton window on AI risk: Tegmark told me that in conversations with AI researchers and tech CEOs, it had become clear that there was a huge amount of anxiety about the existential risk AI poses, but nobody felt they could speak about it openly "for fear of being ridiculed as Luddite scaremongers." "The key goal of the letter was to mainstream the conversation, to move the Overton window so that people felt safe expressing these concerns," he says.


Signal's Meredith Whittaker: 'These are the people who could actually pause AI if they wanted to'

The Guardian

Meredith Whittaker is the president of Signal – the not-for-profit secure messaging app. The service, along with WhatsApp and similar messaging platforms, is opposing the UK government's online safety bill, which, among other things, seeks to scan users' messages for harmful content. Prior to Signal, Whittaker worked at Google, co-founded NYU's AI Now Institute and was an adviser to the Federal Trade Commission. After 10 years at Google, you organised the walkout over the company's attitude to sexual harassment accusations, after which, in 2019, you were forced out. How did you feel about that?


Should We Pause AI?

#artificialintelligence

At a recent White House press conference, a Fox News correspondent asked the Biden administration's press secretary about AI safety researcher Eliezer Yudkowsky's highly publicized claim that if we don't pause or halt the development of artificial intelligence, then "literally everyone on earth will die." The question was met with some laughter from the White House press corps. But as someone with a technical background who covers AI and talks regularly to researchers, developers, and investors in the field, I saw nothing to chuckle at. Rather, I and other more optimistic AI watchers worry that overly dire warnings of imminent AI-driven destruction may cause us to pause or halt the development of a powerful technology with immense potential for improving our lives. Insiders hold a truly wide range of opinions on the best way to approach AI--from Yudkowsky's insistence that we immediately abandon all research in the area, to my own more moderate concern about large-scale industrial accidents arising from misuse of the technology, to an extreme optimism in some quarters about AI's potential to turn humanity into an immortal, star-spanning species.


Humanity is in danger if the world does not pause AI 'experiments', 1,000 experts including Elon Musk warn

#artificialintelligence

Humanity is in danger from "AI experiments," and they must be paused to ensure that we are not at risk, according to more than 1,000 experts. Researchers need to stop working on the development of new artificial intelligence systems for the next six months – and if they will not, then governments need to step in, they warned. That is the grave conclusion of a new open letter signed by experts, among them academics in the field and technology leaders including Elon Musk and Apple co-founder Steve Wozniak. The letter notes that the positive possibilities of AI are significant. It says that humanity "can enjoy a flourishing future" with the technology, and that we can now enjoy an "AI summer" in which we adapt to what has already been created.